
Softmax LUT Optimization #570

Merged: 2 commits into fastmachinelearning:main on Aug 12, 2022
Conversation

@bo3z (Contributor) commented on Jun 16, 2022

This PR exploits the following properties of the current Softmax implementation (see the sketch after the list):

  1. The maximum input is subtracted from all inputs before the exponential is computed, so every argument to the exponential is non-positive. The exponential lookup table therefore only needs entries for negative inputs.
  2. The sum of exponentials is always positive. The invert table therefore only needs entries for positive inputs, not negative ones.
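
To make these two properties concrete, here is a minimal floating-point sketch of the table construction and lookup. This is an illustration only: the table sizes, ranges, and all names are assumptions made for this example, and the actual hls4ml implementation operates on fixed-point types generated by the backend.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative sketch of the optimized table scheme (assumed sizes/ranges,
// not the hls4ml fixed-point implementation).
constexpr int N_TABLE = 1024;
constexpr float EXP_RANGE = 8.0f;  // exp table spans inputs in [-EXP_RANGE, 0]
constexpr float INV_RANGE = 64.0f; // invert table spans sums in (0, INV_RANGE)

float exp_table[N_TABLE];
float invert_table[N_TABLE];

void init_tables() {
    for (int i = 0; i < N_TABLE; i++) {
        // Property 1: arguments are always <= 0 after the max is subtracted,
        // so every entry covers a non-positive input.
        exp_table[i] = std::exp(-EXP_RANGE * i / N_TABLE);
        // Property 2: the sum of exponentials is strictly positive,
        // so every entry covers a positive input (the 0.5 offset avoids 1/0).
        invert_table[i] = 1.0f / (INV_RANGE * (i + 0.5f) / N_TABLE);
    }
}

void softmax(const float* in, float* out, int n) {
    // Find the maximum; subtracting it bounds every exp argument by 0.
    float x_max = in[0];
    for (int i = 1; i < n; i++) x_max = std::max(x_max, in[i]);

    // The difference to the maximum is folded directly into the table
    // index instead of being stored in a separate array.
    float exp_sum = 0.0f;
    for (int i = 0; i < n; i++) {
        float d = x_max - in[i];  // d >= 0 by construction
        int idx = std::min((int)(d / EXP_RANGE * N_TABLE), N_TABLE - 1);
        out[i] = exp_table[idx];
        exp_sum += out[i];
    }

    // One inversion lookup, then scale every exponential by it.
    int inv_idx = std::min((int)(exp_sum / INV_RANGE * N_TABLE), N_TABLE - 1);
    for (int i = 0; i < n; i++) out[i] *= invert_table[inv_idx];
}
```

Because each table only covers the half-axis that is actually reachable, a fixed number of entries gives twice the input resolution of a table that also spends entries on the unreachable half.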

The following plot shows the accuracy of hls4ml (old vs. new) at identifying the argmax, compared to Keras; the accuracy is identical for both implementations:
[plot: acc (argmax accuracy, old vs. new)]

The following plot shows the mean absolute percentage error of hls4ml (old vs. new) relative to Keras's output; the new version performs slightly better than the old one because, for the same table size, restricting the domain to reachable inputs leaves more "relevant" entries in the LUT:
[plot: mape (mean absolute percentage error, old vs. new)]

Finally, the array holding the differences between the elements and the maximum is removed (keeping it brought no significant accuracy gain), saving resources, as seen below:
[figure: softmax_results (resource usage comparison)]

@bo3z requested a review from vloncar on Jun 16, 2022, 14:42
@bo3z (Contributor, Author) commented on Aug 4, 2022

@thesps maybe you could review this PR, since you reviewed the first Quartus Softmax implementation?

@vloncar merged commit 9af6106 into fastmachinelearning:main on Aug 12, 2022